    Systematically convergent method for accurate total energy calculations with localized atomic orbitals

    We introduce a method for performing self-consistent electronic structure calculations within localized atomic orbitals that allows convergence to the complete basis set (CBS) limit in a stable, controlled, and systematic way. We compare our results with those obtained with a standard quantum chemistry package for the simple benzene molecule. We find perfect agreement for small basis sets and show that, within our scheme, it is possible to work with a very large basis in an efficient and stable way. We can therefore avoid introducing any extrapolation to reach the CBS limit. In our study we have also carried out variational Monte Carlo (VMC) and lattice regularized diffusion Monte Carlo (LRDMC) calculations with a standard many-body wave function (WF) defined by the product of a Slater determinant and a Jastrow factor. Once the Jastrow factor is optimized while keeping fixed the Slater determinant provided by our new scheme, we obtain a very good description of the atomization energy of the benzene molecule only when the basis of atomic orbitals is large enough and close to the CBS limit, yielding the lowest variational energies.
    Comment: 22 pages, 6 figures, accepted in Physical Review
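
    The extrapolation that the scheme above renders unnecessary is typically the standard inverse-cube formula E(X) = E_CBS + a/X^3, where X is the basis-set cardinal number. A minimal sketch of the usual two-point variant, shown only for context (the function name and the synthetic data are illustrative, not part of the paper's method):

```python
def cbs_two_point(e_x: float, x: int, e_y: float, y: int) -> float:
    """Two-point inverse-cube extrapolation assuming E(X) = E_CBS + a / X**3.

    x and y are basis-set cardinal numbers (e.g. 3 for a triple-zeta basis,
    4 for a quadruple-zeta basis).
    """
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# Synthetic example: energies generated exactly from the model form,
# so the extrapolation recovers E_CBS exactly.
e_cbs, a = -1.0, 0.5
e3 = e_cbs + a / 3**3   # "triple-zeta" energy
e4 = e_cbs + a / 4**3   # "quadruple-zeta" energy
print(cbs_two_point(e4, 4, e3, 3))  # -> -1.0
```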

    Large-scale computing with Quantum ESPRESSO

    This paper gives a short introduction to Quantum ESPRESSO, a distribution of software for atomistic simulations in condensed-matter physics, chemical physics, and materials science, and to its usage in large-scale parallel computing.

    European actions for High-Performance Computing: PRACE, DEISA and HPC-Europa

    Among e-Infrastructures, ESFRI has identified High-Performance Computing as a strategic priority for Europe in FP7. In order to obtain the highest return from the associated massive economic and political commitment, HPC resources must be exploited at the highest level. Access to them must be effective and widespread, and their services must allow more and more people to use the resources, regardless of their geographical location. Projects such as PRACE, DEISA and HPC-Europa have been supported by the EU to address these issues by providing the best solutions for the European research community.

    An automatic procedure to forecast tephra fallout

    Tephra fallout constitutes a serious threat to communities around active volcanoes. Reliable short-term forecasts represent a valuable aid for scientists and civil authorities to mitigate the effects of fallout on the surrounding areas during an episode of crisis. We present a platform-independent automatic procedure aimed at daily forecasting of the transport and deposition of volcanic particles. The procedure builds on a series of programs and interfaces that automate the data flow, the execution of fallout models, and the subsequent post-processing. Firstly, the procedure downloads regional meteorological forecasts for the area and time interval of interest, filters and converts the data from their native format, and runs the CALMET diagnostic model to obtain the wind field and other micro-meteorological variables on a finer local-scale 3-D grid defined by the user. Secondly, it assesses the distribution of mass along the eruptive column, commonly by means of the radially averaged buoyant plume equations, depending on the prognostic wind field and on the conditions at the vent (granulometry, mass flow rate, etc.). All these data serve as input for the fallout models. The initial version of the procedure includes only two Eulerian models, HAZMAP and FALL3D, the latter available in serial and parallel implementations. However, the procedure is designed to easily incorporate other models in the near future with minor modifications to the model source code. The last step is to post-process the model outcomes to obtain maps written in standard file formats. These maps contain plots of relevant quantities such as predicted ground load, expected deposit thickness and, in the case of 3-D models, concentration in air or flight-safety concentration thresholds.
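
    The staged workflow just described (download meteorological data, run CALMET, compute the column mass distribution, run a fallout model, post-process) can be sketched as a simple driver. All function names and return values here are hypothetical placeholders standing in for the external codes, not the procedure's actual interfaces:

```python
# Hypothetical stage implementations; the real procedure invokes external
# programs (NWP download, CALMET, HAZMAP/FALL3D) and file-format converters.
def download_forecast(region: str) -> dict:
    return {"region": region, "wind": "raw NWP fields"}

def run_calmet(meteo: dict) -> dict:
    # Diagnostic model: refine the wind field onto a local-scale 3-D grid.
    return {**meteo, "wind": "local-scale 3-D wind field"}

def column_mass_distribution(meteo: dict, vent: dict) -> dict:
    # Buoyant plume equations using the wind field and vent conditions.
    return {"meteo": meteo, "source": vent}

def run_fallout_model(inputs: dict, model: str) -> dict:
    return {"model": model, "ground_load": "...", "thickness": "..."}

def postprocess(result: dict) -> str:
    return f"maps for {result['model']}"

def daily_forecast(region: str, vent: dict, model: str = "FALL3D") -> str:
    """Chain the stages in the order the procedure automates them."""
    meteo = run_calmet(download_forecast(region))
    inputs = column_mass_distribution(meteo, vent)
    return postprocess(run_fallout_model(inputs, model))

print(daily_forecast("Etna", {"mass_flow_rate": 1e6}))  # -> maps for FALL3D
```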

    The EXPLORIS project: the virtual reconstruction of the Vesuvius eruption

    The main objective of the Exploris project consists in the quantitative analysis of explosive eruption risk in densely populated EU volcanic regions and the evaluation of the likely effectiveness of possible mitigation measures through the development of volcanic risk facilities (such as supercomputer models, vulnerability databases, and probabilistic risk assessment protocols) and their application to high-risk European volcanoes. Exploris' main ambition is to make a significant step forward in the assessment of explosive eruption risk in highly populated EU cities and islands. For this project, a new simulation model, based on fundamental transport laws to describe the 4D (3-D spatial coordinates plus time) multiphase flow dynamics of explosive eruptions, has been developed and parallelized at INGV and CINECA. Moreover, CINECA developed specific tools to efficiently visualise the results of the simulations. This article presents the results of large numerical simulations, carried out on CINECA's supercomputers, describing the collapse of the volcanic eruption column and the propagation of pyroclastic density currents for selected medium-scale (sub-Plinian) eruptive scenarios at Vesuvius.

    Investigation of particle dynamics and classification mechanism in a spiral jet mill through computational fluid dynamics and discrete element methods

    Predicting the outcome of jet milling based on knowledge of the process parameters and starting material properties is a task still far from being accomplished. Given the technical difficulties in measuring thermodynamics, flow properties and particle statistics directly in the mills, modelling and simulation constitute alternative tools to gain insight into the process physics, and many papers have recently been published on the subject. An ideal predictive simulation tool would combine the correct description of non-isothermal, compressible, high-Mach-number fluid flow, the correct particle-fluid and particle-particle interactions, and the correct fracture mechanics of particles upon collision, but such a tool is not currently available. In this paper we present our coupled CFD-DEM simulation results; by comparing them with recent modelling and experimental works, we review the current understanding of jet-mill physics and particle classification. We then analyze the missing elements and the bottlenecks currently limiting the simulation technique, as well as possible ways to circumvent them towards a quantitative, predictive simulation of jet milling.
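
    The particle-fluid interaction at the heart of coupled CFD-DEM schemes like the one above is, in its simplest closure, a drag law relaxing each particle's velocity toward the local fluid velocity. A minimal one-way-coupled sketch with a Stokes-like relaxation time tau (an illustrative toy model, not the drag law actually used in the paper):

```python
def drag_step(v_p: float, u_f: float, tau: float, dt: float) -> float:
    """One explicit Euler step of the drag equation dv/dt = (u_f - v_p) / tau."""
    return v_p + dt * (u_f - v_p) / tau

# Particle starting at rest in a 10 m/s stream, relaxation time tau = 1 ms.
v, u, tau, dt = 0.0, 10.0, 1e-3, 1e-4
for _ in range(100):          # integrate over 10 relaxation times
    v = drag_step(v, u, tau, dt)
print(v)  # particle velocity has relaxed very close to the fluid velocity
```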

    An application of parallel computing to the simulation of volcanic eruptions

    A parallel code for the simulation of the transient 3-D dispersal of volcanic particles produced by explosive eruptions is presented. The model transport equations, based on multiphase flow theory, describe the atmospheric dynamics of the gas-particle mixture ejected through the volcanic crater. The numerical method is based on a finite-volume discretization scheme and a pressure-based iterative non-linear solver suited to compressible multiphase flows. The code has been parallelized by adopting an ad hoc domain partitioning scheme that enforces load balancing. An optimized communication layer has been built over the Message-Passing Interface. The code proved to be remarkably efficient on several high-performance platforms and makes it possible to simulate fully 3-D eruptive scenarios on realistic volcano topography.
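
    A load-balanced domain partitioning of the kind mentioned above can be illustrated in one dimension by distributing N cells over P ranks so that no rank holds more than one cell more than any other (a toy sketch under that balancing criterion, not the code's actual ad hoc scheme):

```python
def partition(n_cells: int, n_ranks: int) -> list[range]:
    """Split n_cells contiguous cells into n_ranks balanced chunks."""
    base, extra = divmod(n_cells, n_ranks)
    chunks, start = [], 0
    for r in range(n_ranks):
        size = base + (1 if r < extra else 0)  # first `extra` ranks get one more
        chunks.append(range(start, start + size))
        start += size
    return chunks

parts = partition(10, 3)
print([len(p) for p in parts])  # -> [4, 3, 3]
```

    In a real MPI code each rank would compute only its own chunk and exchange ghost-cell data with its neighbours through the communication layer.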

    Full configuration interaction approach to the few-electron problem in artificial atoms

    We present a new high-performance configuration interaction code optimally designed for the calculation of the lowest energy eigenstates of a few electrons in semiconductor quantum dots (also called artificial atoms) in the strong interaction regime. The implementation relies on a single-particle representation, but it is independent of the choice of the single-particle basis and, therefore, of the details of the device and configuration of external fields. Assuming no truncation of the Fock space of Slater determinants generated from the chosen single-particle basis, the code may tackle regimes where the Coulomb interaction very effectively mixes many determinants. Typical strongly correlated systems lead to very large diagonalization problems; in our implementation, the secular equation is reduced to its minimal rank by exploiting the symmetry of the effective-mass interacting Hamiltonian, including the square of the total spin. The resulting Hamiltonian is diagonalized via a parallel implementation of the Lanczos algorithm. The code gives access to both wave functions and energies of the first excited states. Excellent code scalability in a parallel environment is demonstrated; accuracy is tested for the case of up to eight electrons confined in a two-dimensional harmonic trap as the density is progressively diluted and correlation becomes dominant. Comparison with previous Quantum Monte Carlo simulations in the Wigner regime demonstrates the power and flexibility of the method.
    Comment: RevTeX 4.0, 18 pages, 6 tables, 9 postscript b/w figures. Final version with new material. Section 6 on the excitation spectrum has been added. Some material has been moved to two appendices, which appear in the EPAPS web depository in the published version.
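
    The Lanczos iteration used for the diagonalization reduces a symmetric matrix to a small tridiagonal one whose extremal eigenvalues converge quickly to those of the full problem. A serial dense sketch with full reorthogonalization (the paper's implementation is parallel and exploits the Hamiltonian's symmetries; this only illustrates the bare algorithm):

```python
import numpy as np

def lanczos_lowest(A: np.ndarray, k: int = 20, seed: int = 0) -> float:
    """Estimate the lowest eigenvalue of symmetric A with k Lanczos steps."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)           # random normalized start vector
    Q, alphas, betas = [q], [], []
    beta, q_prev = 0.0, np.zeros(n)
    for j in range(k):
        w = A @ q - beta * q_prev    # three-term recurrence
        alpha = q @ w
        w -= alpha * q
        for v in Q:                  # full reorthogonalization for stability
            w -= (v @ w) * v
        beta = np.linalg.norm(w)
        alphas.append(alpha)
        if beta < 1e-12 or j == k - 1:
            break                    # Krylov space exhausted or budget spent
        betas.append(beta)
        q_prev, q = q, w / beta
        Q.append(q)
    # Eigenvalues of the small tridiagonal matrix approximate those of A.
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    return np.linalg.eigvalsh(T)[0]
```

    With k equal to the matrix dimension and full reorthogonalization this reproduces the exact lowest eigenvalue; in practice a k much smaller than the Fock-space dimension already converges the extremal states.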

    EXSCALATE: An Extreme-Scale Virtual Screening Platform for Drug Discovery Targeting Polypharmacology to Fight SARS-CoV-2

    The social and economic impact of the COVID-19 pandemic demands a reduction of the time required to find a therapeutic cure. In this paper, we describe the EXSCALATE molecular docking platform, capable of scaling to an entire modern supercomputer to support extreme-scale virtual screening campaigns. Such virtual experiments can quickly provide information on which molecules to consider in the next stages of the drug discovery pipeline, a key asset in the case of a pandemic. The EXSCALATE platform has been designed to benefit from heterogeneous computation nodes and to reduce scaling issues. In particular, we maximized the accelerators' usage, minimized communication between nodes, and aggregated I/O requests to serve them more efficiently. Moreover, we balanced the computation across the nodes by designing an ad-hoc workflow based on the predicted execution time of each molecule. We deployed the platform on two HPC supercomputers, with a combined computational power of 81 PFLOPS, to evaluate the interaction between 70 billion small molecules and 15 binding sites of 12 viral proteins of SARS-CoV-2. The experiment lasted 60 hours and performed more than one trillion ligand-pocket evaluations, setting a new record for the scale of virtual screening.
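
    Balancing work by predicted per-molecule execution time, as described above, amounts to a classic scheduling problem. A common greedy strategy assigns the longest predicted task to the currently least-loaded node (a generic sketch of that strategy, not EXSCALATE's actual workflow engine):

```python
import heapq

def balance(predicted_costs: dict[str, float], n_nodes: int) -> list[list[str]]:
    """Assign each task to the least-loaded node, longest tasks first."""
    loads = [(0.0, i) for i in range(n_nodes)]   # (current load, node id)
    heapq.heapify(loads)
    buckets: list[list[str]] = [[] for _ in range(n_nodes)]
    for task in sorted(predicted_costs, key=predicted_costs.get, reverse=True):
        load, node = heapq.heappop(loads)        # least-loaded node so far
        buckets[node].append(task)
        heapq.heappush(loads, (load + predicted_costs[task], node))
    return buckets

# Hypothetical predicted docking times (seconds) for four molecules.
costs = {"mol_a": 9.0, "mol_b": 5.0, "mol_c": 4.0, "mol_d": 2.0}
print(balance(costs, 2))  # -> [['mol_a', 'mol_d'], ['mol_b', 'mol_c']]
```

    The resulting per-node loads (11 s and 9 s here) are far closer than a naive round-robin over the unsorted list would give.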